
ChatGPT’s human toll

18th July 2025

By: Martin Zhuwakinyu

Creamer Media Senior Deputy Editor

     


Generative AI tools like OpenAI’s ChatGPT are hailed as some of the most disruptive innovations in recent memory, promising – or already delivering – sweeping gains across countless fields. But lurking behind the marvel is a murky underbelly that rarely gets its moment in the spotlight.

Africa has not been spared the darker side of ChatGPT. Consider this: users are protected from exposure to graphic or harmful content, even though the model is trained on hundreds of billions of words scraped from the Internet – a trove that includes violent, explicit and gender- or racially-biased material. This sanitisation doesn’t just happen – it requires human moderators who sift through the worst of it to ensure it never reaches the user.

We now know, from reportage in authoritative publications such as Time magazine, that OpenAI hired a San Francisco-based company named Sama to train the ChatGPT model to identify undesirable content – and keep it away from users. In turn, Sama outsourced this work to content moderators in Nairobi, Kenya. This might have been a boon, given Africa’s high unemployment rate. But the trouble is that the content moderators were paid a paltry $2 an hour – hardly fair remuneration for someone with a degree in computer science or IT.

The peanuts Sama’s contractors in Nairobi received were not the only challenge they faced. Having to read the kind of content OpenAI wants kept out of its ChatGPT tool – for hours on end every day, five days a week – clearly isn’t good for anyone’s mental health. Footage of the war in Ukraine or of suicide bombings, for instance, was among the material they had to review and train ChatGPT on.

I’ve read of content moderators complaining of episodes of insomnia, anxiety, depression and panic attacks, which they put down to their work at Sama. One told the UK’s The Guardian newspaper in 2023 that he reviewed up to 700 text passages daily, many depicting sexual violence. Some of these, he said, would flash into his mind when he was alone or trying to sleep.

But it’s not only OpenAI – through Sama, its on-the-ground surrogate in Africa before their relationship ended – that has faced allegations of subjecting content moderators on the continent to poor pay and grim working conditions. Meta, owner of social media platforms including Facebook, Instagram and Messenger, has also been in the crosshairs.

The tech giant is being sued by former workers who allege that the work has left them with mental health problems. As reported by The Guardian in December, one told witness psychiatrists that she had worked on a video showing a man being dismembered limb by limb until he died. Other content she had to moderate included videos of summary executions during Ethiopia’s civil war.

So strongly do the content moderators in Kenya and elsewhere on the continent feel hard done by that they established a representative body, the African Content Moderators Union, in 2023.

Uchechuku Ajuzieogu, founder and editor-in-chief of AyIgorith, a new publication that provides rigorous economic analysis of AI, wrote in an article posted on LinkedIn last month that the dark impact of AI extends to places such as the Democratic Republic of Congo, where children who should be in school – some as young as ten – work in dangerous artisanal mines, extracting the cobalt that powers the smartphones and data centres that make AI possible.

AI poses a danger to its users too. Citing research conducted by scientists at the Massachusetts Institute of Technology, in the US, psychiatrist Marlynn Wei wrote in Psychology Today last month that the more one relied on AI assistance, the less engaged one’s brain networks became, especially those associated with memory, attention and executive function.

The researchers named this effect “cognitive debt” – a reference to how repeated reliance on AI may impair the cognitive processes behind independent thinking. This suggests that, while AI may offer short-term gains in productivity and reduced mental effort, the long-term cost could be a decline in critical thinking, creativity, learning and memory.

In short, the hidden toll of generative AI – on African workers, vulnerable communities and even the minds of its users – demands far more scrutiny than it currently receives.

Edited by Martin Zhuwakinyu
Creamer Media Senior Deputy Editor
